Learning Algorithms for Separable Approximations of Discrete Stochastic Optimization Problems
Authors
Abstract
We propose the use of sequences of separable, piecewise linear approximations for solving nondifferentiable stochastic optimization problems. The approximations are constructed adaptively using a combination of stochastic subgradient information and possibly sample information on the objective function itself. We prove the convergence of several versions of such methods when the objective function is separable and has integer break points, and we illustrate their behavior on numerical examples. We then demonstrate their performance on nonseparable problems that arise in the context of two-stage stochastic programming, showing that these techniques provide near-optimal solutions with a very fast rate of convergence compared with other solution techniques.
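The abstract describes approximations built up segment by segment from stochastic subgradient samples. The sketch below is a minimal, illustrative Python version of that idea, not the paper's exact algorithm: a one-dimensional concave piecewise linear function with integer break points, whose segment slopes are smoothed toward sampled subgradients and then projected back onto the set of non-increasing slopes. The function names, the toy demand model, and the step-size rule are assumptions made for the example.

```python
import numpy as np

def project_concave(slopes, r):
    """Restore non-increasing slopes after only slopes[r] has changed,
    by pooling adjacent violators and replacing them with their average."""
    lo = hi = r
    while True:
        block_avg = slopes[lo:hi + 1].mean()
        if lo > 0 and slopes[lo - 1] < block_avg:
            lo -= 1                       # concavity violated on the left
        elif hi < len(slopes) - 1 and slopes[hi + 1] > block_avg:
            hi += 1                       # concavity violated on the right
        else:
            slopes[lo:hi + 1] = block_avg
            return slopes

def adaptive_update(slopes, r, sampled_slope, step):
    """Smooth a sampled stochastic subgradient into the slope of the
    segment [r, r+1), then project back to a concave approximation.
    slopes[i] is the slope of the piecewise linear function on [i, i+1);
    concavity means the slopes are non-increasing in i."""
    slopes = slopes.astype(float)
    slopes[r] = (1.0 - step) * slopes[r] + step * sampled_slope
    return project_concave(slopes, r)

# Toy usage: learn the slopes of f(x) = E[min(x, D)] for integer x,
# where D is uniform on {0, ..., 9}; the exact slope on [i, i+1) is P(D > i).
rng = np.random.default_rng(0)
slopes = np.ones(10)                      # initial (concave) guess
for n in range(1, 2001):
    r = int(rng.integers(0, 10))          # segment to sample
    d = int(rng.integers(0, 10))          # demand sample
    sampled_slope = 1.0 if d > r else 0.0
    slopes = adaptive_update(slopes, r, sampled_slope, step=1.0 / n)
```

A separable problem would apply one such update per coordinate; the projection step is what keeps each one-dimensional approximation concave in spite of the noisy updates.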
Similar Resources
A Discrete Hybrid Teaching-Learning-Based Optimization algorithm for optimization of space trusses
In this study, two well-known algorithms are merged to obtain an improved hybrid algorithm that enhances the optimization process, especially in the structural engineering field. These two algorithms are Teaching-Learning-Based Optimization (TLBO) and Harmony Search (HS), which have been widely used in many fields of science. The hybridized algorithm is called A Di...
A novel technique for a class of singular boundary value problems
In this paper, Lagrange interpolation at Chebyshev-Gauss-Lobatto nodes is used to develop a procedure for finding discrete and continuous approximate solutions of a singular boundary value problem. First, a continuous-time optimization problem related to the original singular boundary value problem is proposed. Then, using the Chebyshev-Gauss-Lobatto nodes, we convert the continuous-time op...
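For context on the interpolation step mentioned in this blurb, here is a small, self-contained sketch (not the paper's full procedure) of Chebyshev-Gauss-Lobatto nodes and a Lagrange interpolant built on them; the function names and the test function are illustrative assumptions.

```python
import numpy as np

def cgl_nodes(n):
    """Chebyshev-Gauss-Lobatto nodes x_j = cos(pi * j / n), j = 0, ..., n."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolant through (nodes, values) at x."""
    total = 0.0
    for j, xj in enumerate(nodes):
        basis = 1.0
        for k, xk in enumerate(nodes):
            if k != j:
                basis *= (x - xk) / (xj - xk)
        total += values[j] * basis
    return total

# Example: interpolate f(x) = exp(x) on [-1, 1] with 8 CGL intervals (9 nodes).
nodes = cgl_nodes(8)
values = np.exp(nodes)
print(lagrange_eval(nodes, values, 0.3))   # close to exp(0.3) ≈ 1.3499
```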
PERFORMANCE OF DIFFERENT ANT-BASED ALGORITHMS FOR OPTIMIZATION OF MIXED VARIABLE DOMAIN IN CIVIL ENGINEERING DESIGNS
Ant colony optimization algorithms (ACOs) were originally introduced for discrete-variable problems and have been applied to different research domains in several engineering fields. Meanwhile, many studies have already adapted different ant models to continuous search spaces. Assessments indicate competitive performance of ACOs on both discrete and continuous domains. Therefore, as poten...
HAMSI: Distributed Incremental Optimization Algorithm Using Quadratic Approximations for Partially Separable Problems
We present HAMSI, a provably convergent incremental algorithm for solving large-scale partially separable optimization problems that frequently emerge in machine learning and inferential statistics. The algorithm is based on a local quadratic approximation and hence allows incorporating second-order curvature information to speed up convergence. Furthermore, HAMSI needs almost no tuning, ...
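As a rough illustration of the kind of update described in this blurb, the following sketch applies one damped Newton-like step per component of a partially separable sum, using a local quadratic model of that component. It is a generic incremental step, not HAMSI itself; the names, damping constant, and toy least-squares problem are assumptions for the example.

```python
import numpy as np

def incremental_quadratic_step(x, grad_i, hess_i, step=1.0, damping=1e-6):
    """One incremental update that minimizes a damped local quadratic model
    of a single component f_i of the sum f(x) = sum_i f_i(x)."""
    h = hess_i(x) + damping * np.eye(len(x))   # keep the linear system well posed
    return x - step * np.linalg.solve(h, grad_i(x))

# Toy usage on f(x) = sum_i 0.5 * (a_i . x - b_i)^2, one component per step.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(20, 3)), rng.normal(size=20)
x = np.zeros(3)
for t in range(200):
    i = t % len(b)
    grad_i = lambda v, i=i: (A[i] @ v - b[i]) * A[i]    # gradient of component i
    hess_i = lambda v, i=i: np.outer(A[i], A[i])         # its (rank-one) Hessian
    x = incremental_quadratic_step(x, grad_i, hess_i, step=0.5)
```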
Testing Soccer League Competition Algorithm in Comparison with Ten Popular Meta-heuristic Algorithms for Sizing Optimization of Truss Structures
Recently, many meta-heuristic algorithms have been proposed for the optimization of various problems. Some of them are originally presented for continuous optimization problems, while others are applicable only to discrete ones. In the literature, sizing optimization of truss structures is one of the discrete optimization problems solved by many meta-heuristic algorithms. In this paper, in or...
Journal: Math. Oper. Res.
Volume: 29, Issue: -
Pages: -
Publication date: 2004